Amazon S3 target overview

When using Amazon S3 as a target in a Replicate task, both the Full Load and Change Processing data are written to data files. Depending on the endpoint settings, the data file format can be CSV, JSON, or Parquet. While the explanations in this section relate to CSV files, they apply equally to JSON and Parquet files.

Full Load files are named using incremental counters (e.g. LOAD00001.csv, LOAD00002.csv, etc.), whereas Apply Changes files are named using timestamps (e.g. 20141029-1134010000.csv).
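As a rough illustration of these two conventions, the following sketch builds file names matching the examples above. The function names and the exact counter width and timestamp precision are assumptions inferred from the sample names, not part of the product's documented API.

```python
from datetime import datetime

def full_load_name(counter: int) -> str:
    # Hypothetical helper: Full Load files use a zero-padded
    # incremental counter, e.g. LOAD00001.csv (5-digit width assumed
    # from the examples above).
    return f"LOAD{counter:05d}.csv"

def apply_changes_name(ts: datetime) -> str:
    # Hypothetical helper: Apply Changes files use a timestamp,
    # e.g. 20141029-1134010000.csv. The trailing four digits are
    # assumed to be sub-second precision based on the sample name.
    return f"{ts.strftime('%Y%m%d-%H%M%S')}0000.csv"

print(full_load_name(1))                                # LOAD00001.csv
print(apply_changes_name(datetime(2014, 10, 29, 11, 34, 1)))  # 20141029-1134010000.csv
```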

Information note

When Parallel Load is used, the naming convention for Full Load files is slightly different:

LOAD_$(SegmentID)_$(IncreasingCounter)

Example:

LOAD_1_00000001 | LOAD_1_00000002 | LOAD_1_00000003 | LOAD_2_00000001 | LOAD_2_00000002
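The Parallel Load pattern can be sketched the same way. The 8-digit counter width is an assumption taken from the example names above, and the helper itself is purely illustrative.

```python
def parallel_load_name(segment_id: int, counter: int) -> str:
    # Hypothetical helper: Parallel Load Full Load files are named
    # LOAD_<segment ID>_<increasing counter>, e.g. LOAD_1_00000001.
    # The 8-digit zero padding is assumed from the examples.
    return f"LOAD_{segment_id}_{counter:08d}"

print(parallel_load_name(1, 1))  # LOAD_1_00000001
print(parallel_load_name(2, 2))  # LOAD_2_00000002
```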

Information note

When the Create metadata files in the target folder option is enabled, a corresponding metadata file is created using the same naming format, but with a .dfm extension.

For each source table, a folder is created in the specified Amazon S3 bucket. The data files are created on the Replicate Server machine and are then uploaded to the specified Amazon S3 bucket once the File Attributes (Full Load) and Change Processing upload conditions have been met.
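To make the per-table folder layout concrete, here is a minimal sketch of how the resulting S3 object keys might be composed. The folder-per-table structure comes from the paragraph above; the function name and the exact target-folder prefix are assumptions for illustration only.

```python
def s3_key(target_folder: str, table_name: str, file_name: str) -> str:
    # Hypothetical helper: each source table gets its own folder
    # under the configured target folder in the S3 bucket, and the
    # uploaded data files land inside that folder.
    return f"{target_folder}/{table_name}/{file_name}"

# Example layout for a table named "orders" (assumed name):
print(s3_key("replicate-data", "orders", "LOAD00001.csv"))
print(s3_key("replicate-data", "orders", "20141029-1134010000.csv"))
```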
